Learning as MAP Inference in Discrete Graphical Models
Authors
Abstract
We present a new formulation for binary classification. Instead of relying on convex losses and regularizers, as in SVMs, logistic regression, and boosting, or on non-convex but continuous formulations, such as those encountered in neural networks and deep belief networks, our framework entails a non-convex but discrete formulation, where estimation amounts to finding a MAP configuration in a graphical model whose potential functions are low-dimensional discrete surrogates for the misclassification loss. We argue that such a discrete formulation can naturally account for a number of issues that are typically encountered in either the convex or the continuous non-convex approaches, or both. By reducing the learning problem to a MAP inference problem, we can immediately translate the guarantees available for many inference settings to the learning problem itself. We demonstrate empirically in a number of experiments that this approach is promising in dealing with issues such as severe label noise, while still providing global optimality guarantees. Due to the discrete nature of the formulation, it also allows for direct regularization through cardinality-based penalties, such as the ℓ0 pseudo-norm, thus providing the ability to perform feature selection and to trade off interpretability and predictability in a principled manner. We also outline a number of open problems arising from the formulation.
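To make the idea concrete, here is a minimal sketch (not the paper's actual algorithm or model) of learning as discrete optimization: weights are restricted to a small discrete set, the objective is the misclassification count plus an ℓ0 cardinality penalty, and "inference" is an exhaustive search for the lowest-scoring configuration. The function name, toy data, and penalty weight `lam` are all illustrative assumptions; a real instance would exploit graphical-model structure rather than brute-force enumeration.

```python
# Illustrative sketch only: discrete learning as a search for the
# minimum-score (MAP-like) weight configuration. Names and data are
# hypothetical, not from the paper.
from itertools import product

def map_learn(X, y, weight_vals=(-1, 0, 1), lam=0.5):
    """Enumerate all discrete weight vectors; the score is the
    misclassification count plus lam times the l0 penalty
    (number of nonzero weights)."""
    d = len(X[0])
    best_w, best_score = None, float("inf")
    for w in product(weight_vals, repeat=d):
        preds = (1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
                 for x in X)
        errors = sum(1 for p, t in zip(preds, y) if p != t)
        score = errors + lam * sum(1 for wi in w if wi != 0)
        if score < best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Toy data: the label depends only on the first feature, so the
# cardinality penalty pushes the second weight to zero.
X = [(1, 3), (2, -1), (-1, 2), (-2, -2)]
y = [1, 1, -1, -1]
w, score = map_learn(X, y)  # w == (1, 0): zero errors, one nonzero weight
```

Because the search is over a finite discrete set, the returned configuration is globally optimal for this objective, which is the kind of guarantee the formulation aims to inherit from MAP inference; the catch, of course, is that naive enumeration is exponential in the number of features.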
Similar References
Maximum Persistency via Iterative Relaxed Inference in Graphical Models
We consider the NP-hard problem of MAP-inference for undirected discrete graphical models. We propose a polynomial time and practically efficient algorithm for finding a part of its optimal solution. Specifically, our algorithm marks some labels of the considered graphical model either as (i) optimal, meaning that they belong to all optimal solutions of the inference problem; (ii) non-optimal i...
Bottom-Up Approaches to Approximate Inference and Learning in Discrete Graphical Models
OF THE DISSERTATION Bottom-Up Approaches to Approximate Inference and Learning in Discrete Graphical Models By Andrew Edward Gelfand Doctor of Philosophy in Computer Science University of California, Irvine, 2014 Professors Rina Dechter, Alexander Ihler, Co-Chairs Probabilistic graphical models offer a convenient and compact way to describe complex and uncertain relationships in data. A graphic...
libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models
This paper describes the software package libDAI, a free and open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition...
Approximate Inference in Collective Graphical Models
We study the problem of approximate inference in collective graphical models (CGMs), which were recently introduced to model the problem of learning and inference with noisy aggregate observations. We first analyze the complexity of inference in CGMs: unlike inference in conventional graphical models, exact inference in CGMs is NP-hard even for tree-structured models. We then develop a tractabl...
Approximate MAP Inference in Continuous MRFs
Computing the MAP assignment in graphical models is generally intractable. As a result, for discrete graphical models, the MAP problem is often approximated using linear programming relaxations. Much research has focused on characterizing when these LP relaxations are tight, and while they are relatively well-understood in the discrete case, only a few results are known for their continuous ana...